Negative Stereotype


Evaluating Large Language Models through Gender and Racial Stereotypes

Malik, Ananya

arXiv.org Artificial Intelligence

Language Models have ushered in a new age of AI, gaining traction within the NLP community as well as amongst the general population. AI's ability to make predictions and generations, and its applications in sensitive decision-making scenarios, make it all the more important to study these models for possible biases that may exist and may be exaggerated. We conduct a qualitative comparative study and establish a framework to evaluate language models with respect to two kinds of bias, gender and race, in a professional setting. We find that while gender bias has been reduced immensely in newer models compared to older ones, racial bias still persists.


Revealed: What the average people in 13 UK counties look like, according to AI - so do YOU agree?

Daily Mail - Science & tech

The UK is home to 92 counties, each with its own distinctive look and feel. Now, a film editor has tasked artificial intelligence (AI) with putting faces to these counties - with hilarious results. Duncan Thomsen, 53, used the software Midjourney to create images of 'average people' in 13 counties. The results suggest that the average residents in County Antrim are young with red hair, while people living in Anglesey are elderly (and wrapped up for the cold weather!). So, do you agree with what AI thinks the average people look like in your county?


MailOnline asks ChatGPT to come up with a stereotype for residents in all UK counties

Daily Mail - Science & tech

ChatGPT has revealed some scathing stereotypes of UK residents in a merciless study of what clichés exist in every county. The cutting-edge bot labeled Yorkshiremen as 'rude' while Londoners were slammed for their arrogance in the nationwide analysis. The truly insulting results came after MailOnline asked ChatGPT to expose what 'negative stereotypes' exist of people from our nation. While the bot insisted that it did not condone stereotypes, it offered a list of those associated with each place when prompted. On the whole, residents of the UK were deemed to have bad teeth while being overly polite and obsessed with the Royal Family.


Here is what ChatGPT thinks of people in every US state

Daily Mail - Science & tech

ChatGPT has been accused of being woke and shying away from offensive feedback -- but not when it comes to negative stereotypes about Americans. ChatGPT stated that people in Alabama are 'hillbillies', Idahoans are 'gun-toting survivalists', Wisconsinites are 'heavy drinkers' and people in Iowa are just plain 'boring'. When it came to the most populous states, the AI said New Yorkers are rude, Californians are superficial, Texans are pro-gun, Floridians are crazy and Pennsylvanians are unwelcoming to outsiders. However, not all stereotypes were offensive. The bot described Ohioans as down-to-earth, New Mexicans as spiritual, residents of Oregon as hipsters and Nebraskans as friendly.


Words of Wisdom: Representational Harms in Learning From AI Communication

Buddemeyer, Amanda, Walker, Erin, Alikhani, Malihe

arXiv.org Artificial Intelligence

Many educational technologies use artificial intelligence (AI) that presents generated or produced language to the learner. We contend that all language, including all AI communication, encodes information about the identity of the human or humans who contributed to crafting the language. With AI communication, however, the user may index identity information that does not match the source. This can lead to representational harms if language associated with one cultural group is presented as "standard" or "neutral", if the language advantages one group over another, or if the language reinforces negative stereotypes. In this work, we discuss a case study using a Visual Question Generation (VQG) task involving gathering crowdsourced data from targeted demographic groups. Generated questions will be presented to human evaluators to understand how they index the identity behind the language, whether and how they perceive any representational harms, and how they would ideally address any such harms caused by AI communication. We reflect on the educational applications of this work as well as the implications for equality, diversity, and inclusion (EDI).


Louisville gamer startup is changing the negative stereotypes around video games

USATODAY - Tech Top Stories

LOUISVILLE – The culture surrounding video games is often shrouded in stereotypes and negative connotations. How often have we heard the narrative, especially following mass shootings in America, that video games are linked to violent behavior? Or that people who play video games are "basement dwellers" with no life? Or even the idea that all video game developers are Silicon Valley tech bros and it's a "man's world"?


Being smarter means you are more likely to use stereotypes

Daily Mail - Science & tech

People with higher cognitive abilities are often better able to spot patterns in the world around them, allowing them to excel in a wide range of tasks, from learning languages to recognizing faces. But, in some situations, even being intelligent has its drawbacks. A new study has found that these people are more likely to stereotype others based on the patterns they detect, potentially perpetuating social biases with negative consequences. In the study, the researchers manipulated image-description pairings so that faces with particular features were linked to negative stereotypes.


When bias in product design means life or death

#artificialintelligence

Carol E. Reiley is the co-founder and president of Drive.ai. She previously founded Tinkerbelle Laboratories. During my Ph.D. studies, I developed a voice-activated human-robot interface for a surgical robotic system using Microsoft's speech recognition API. But, because the API had been built mainly by 20- to 30-year-old men, it did not recognize my voice. I had to lower my pitch in order for it to work.

